Causal Categorization with Bayes Nets

Author

  • Bob Rehder
Abstract

A theory of categorization is presented in which knowledge of causal relationships between category features is represented as a Bayesian network. Referred to as causal-model theory, this theory predicts that objects are classified as category members to the extent they are likely to have been produced by a category's causal model. On this view, people have models of the world that lead them to expect a certain distribution of features in category members (e.g., correlations between feature pairs that are directly connected by causal relationships), and consider exemplars good category members when they manifest those expectations. These expectations include sensitivity to higher-order feature interactions that emerge from the asymmetries inherent in causal relationships.

Research on categorization has traditionally focused on the problem of learning new categories given observations of category members. In contrast, the theory-based view of categories emphasizes the influence of the prior theoretical knowledge that learners often contribute to their representations of categories [1]. However, in contrast to models accounting for the effects of empirical observations, few models have been developed to account for the effects of prior knowledge. The purpose of this article is to present a model of categorization referred to as causal-model theory, or CMT [2, 3]. According to CMT, people's knowledge of many categories includes not only features but also an explicit representation of the causal mechanisms that people believe link those features. In this article I apply CMT to the problem of establishing an object's category membership.

In the psychological literature, one standard view of categorization is that objects are placed in a category to the extent they have features that have often been observed in members of that category. For example, an object that has most of the features of birds (e.g., wings, flying, building nests in trees) and few features of other categories is judged to be a bird. This view of categorization is formalized by prototype models, in which classification is a function of the similarity (i.e., the number of shared features) between a mental representation of a category prototype and a to-be-classified object. However, a well-known difficulty with prototype models is that a feature's contribution to category membership is independent of the presence or absence of other features. In contrast, a category's theoretical knowledge is likely to influence which combinations of features make for acceptable category members. For example, people believe that birds build nests in trees because they can fly, and in light of this knowledge an animal that doesn't fly and yet still builds nests in trees might be considered a less plausible bird than an animal that builds nests on the ground and doesn't fly (e.g., an ostrich), even though the latter animal has fewer features typical of birds.

To assess whether knowledge in fact influences which feature combinations make for good category members, in the following experiment undergraduates were taught novel categories whose four binary features exhibited either a common-cause or a common-effect schema (Figure 1). In the common-cause schema, one category feature (F1) is described as causing the three other features (F2, F3, and F4). In the common-effect schema, one feature (F4) is described as being caused by the three others (F1, F2, and F3).
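To make the two schemas concrete, the following sketch (my own illustration, not part of the article) encodes them as directed edge lists over the features F1–F4 named in the text; the edge-list representation itself is an assumption made for illustration.

```python
# Illustrative encoding of the two causal schemas of Figure 1 as directed
# edge lists (cause -> effect). Feature names F1-F4 follow the text.
common_cause_edges = [("F1", "F2"), ("F1", "F3"), ("F1", "F4")]   # F1 causes F2, F3, and F4
common_effect_edges = [("F1", "F4"), ("F2", "F4"), ("F3", "F4")]  # F1, F2, and F3 each cause F4
```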
CMT assumes that people represent causal knowledge such as that in Figure 1 as a kind of Bayesian network [4], in which nodes are variables representing binary category features and directed edges are causal relationships representing the presence of probabilistic causal mechanisms between features. Specifically, CMT assumes that when a cause feature is present it enables the operation of a causal mechanism that will, with some probability m, bring about the presence of the effect feature. CMT also allows for the possibility that effect features have potential background causes that are not explicitly represented in the network, as captured by the parameter b, the probability that an effect will be present even when its network causes are absent. Finally, each cause node has a parameter c that represents the probability that a cause feature will be present.

Figure 1. The common-cause schema, the common-effect schema, and the feature correlations each schema implies.
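As a concrete illustration of this parameterization, the sketch below computes the likelihood of a four-feature exemplar under the common-cause network of Figure 1, combining the mechanism probability m and the background probability b as independent generative routes (a noisy-OR combination). Only the parameters c, m, and b come from the text; the function name, the dictionary encoding of exemplars, and the noisy-OR assumption are my own.

```python
def common_cause_likelihood(exemplar, c, m, b):
    """Likelihood of an exemplar under the common-cause network (F1 -> F2, F3, F4).

    c : probability that the cause feature F1 is present
    m : probability that an operating causal mechanism produces its effect
    b : probability that an effect is present due to background causes
    exemplar : dict mapping feature names to 0 (absent) or 1 (present),
               e.g. {"F1": 1, "F2": 1, "F3": 0, "F4": 1}
    """
    cause_present = exemplar["F1"] == 1
    likelihood = c if cause_present else 1.0 - c
    for effect in ("F2", "F3", "F4"):
        # If F1 is present, the effect can be produced by the network mechanism
        # or by a background cause (treated as independent routes); if F1 is
        # absent, only the background cause can produce it.
        p_effect = 1.0 - (1.0 - m) * (1.0 - b) if cause_present else b
        likelihood *= p_effect if exemplar[effect] == 1 else 1.0 - p_effect
    return likelihood


# An exemplar whose features all fit the causal model is more likely (hence a
# better category member) than one whose effects appear without their cause.
good = {"F1": 1, "F2": 1, "F3": 1, "F4": 1}
odd = {"F1": 0, "F2": 1, "F3": 1, "F4": 1}
print(common_cause_likelihood(good, c=0.75, m=0.75, b=0.2))  # relatively high
print(common_cause_likelihood(odd, c=0.75, m=0.75, b=0.2))   # relatively low
```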


Similar articles

A theory of causal learning in children: causal maps and Bayes nets.

The authors outline a cognitive and computational account of causal learning in children. They propose that children use specialized cognitive systems that allow them to recover an accurate "causal map" of the world: an abstract, coherent, learned representation of the causal relations among events. This kind of knowledge can be perspicuously understood in terms of the formalism of directed gra...


Philosophical Foundations for Causal Networks

Bayes nets are seeing increasing use in expert systems [2, 6], and structural equations models continue to be popular in many branches of the social sciences [1]. Both types of models involve directed acyclic graphs with variables as nodes, and in both cases there is much mysterious talk about causal interpretation. This paper uses probability trees to give precise conditions under which Bayes ...


A Quantum Bayes Net Approach to Causal Reasoning

When individuals have little knowledge about a causal system and must make causal inferences based on vague and imperfect information, their judgments often deviate from the normative prescription of classical probability. Previously, many researchers have dealt with violations of normative rules by elaborating causal Bayesian networks through the inclusion of hidden variables. While these mode...


Managing Venture Capital Investment Decisions: a Knowledge-based Approach

In this study, we build a causal map of the investment decision using causal mapping techniques by interviewing venture capitalists (VCs). We convert this map into a causal Bayes net using techniques we developed. Causal Bayes nets are especially suited for domains characterized by high degree of uncertainty. Bayes nets have been recently developed in artificial intelligence and used in medical...


Doing After Seeing

Causal knowledge serves two functions: it allows us to predict future events on the basis of observations and to plan actions. Although associative learning theories traditionally differentiate between learning based on observations (classical conditioning) and learning based on the outcomes of actions (instrumental conditioning), they fail to express the common basis of these two modes of acce...



Publication date: 2001